Re: [GENERAL] slow inserts and updates on large tables
From | Herouth Maoz
Subject | Re: [GENERAL] slow inserts and updates on large tables
Date |
Msg-id | l03110700b2f036cdd928@[147.233.159.109]
In reply to | slow inserts and updates on large tables (jim@reptiles.org (Jim Mercer))
Replies | Re: [GENERAL] slow inserts and updates on large tables
List | pgsql-general
At 5:02 +0200 on 17/2/99, Jim Mercer wrote:

> if anyone has any pointers as to why this is so slow, lemme know.

Have you checked the usual stuff:

1) Each INSERT and UPDATE creates its own transaction. That is, an
   INSERT is in fact:

       BEGIN TRANSACTION;
       INSERT...;
       COMMIT;

   So, to make things faster, you should BEGIN TRANSACTION explicitly
   before all the inserts and COMMIT after them, or separate them into
   manageable batches if you run into a memory problem.

2) Indexing a table means faster SELECTs at the expense of slower
   INSERTs and UPDATEs. There is no magic. The reasoning is that you
   normally query the data far more often than you change it. Thus, it
   is preferable, before doing bulk inserts, to drop the indices and
   recreate them afterwards. This holds when you do not expect the
   database to be queried while the inserts are being made.

   Updates are trickier, because the WHERE part of the update actually
   uses the index. If speed is of the essence, I would probably try the
   following:

       SELECT * INTO TABLE temp_table FROM your_table WHERE update_condition;
       DELETE FROM your_table WHERE update_condition;
       DROP INDEX...;
       INSERT INTO your_table SELECT ... FROM temp_table; -- update within select
       CREATE INDEX...;

3) Back to the issue of INSERTs - copies are faster. If you can
   transform the data into the tab-delimited format required by COPY,
   you save a lot of time on parsing, planning, etc.

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma
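[Editor's note: point (1) above, wrapping many inserts in one explicit transaction rather than paying one commit per statement, can be sketched in a self-contained way. This is a minimal illustration using Python's standard-library sqlite3 module, which is an assumption for demonstration purposes only and not the poster's PostgreSQL setup; the principle of batching statements between an explicit BEGIN and COMMIT is the same.]

```python
import sqlite3

# Hypothetical in-memory database for illustration; in PostgreSQL the
# same batching would be done with an explicit BEGIN ... COMMIT around
# the INSERT statements.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

N = 1000

# Slow pattern: each INSERT is its own implicit transaction.
for i in range(N):
    conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x"))

conn.execute("DELETE FROM t")

# Fast pattern: one explicit transaction around the whole batch.
conn.execute("BEGIN")
for i in range(N):
    conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x"))
conn.execute("COMMIT")

rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rows)  # 1000
```

On a real, disk-backed database the batched version avoids a synchronous commit per row, which is where the large speedup comes from; an in-memory database only demonstrates the transaction structure, not the full I/O saving.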